Internet research is the practice of using Internet information, especially free information on the World Wide Web, in research. It is:
* focused and purposeful (so not recreational browsing),
* based on Internet information or Internet-based resources (such as Internet discussion forums),
* tending towards the immediate (drawing answers from information that can be accessed without delay),
* and tending to access information without a purchase price.

Internet research has had a profound impact on the way ideas are formed and knowledge is created. Common applications of ''Internet research'' include personal research on a particular subject (something mentioned on the news, a health problem, etc.), students doing research for academic projects and papers, and journalists and other writers researching stories.

''Research'' is a broad term. Here, it is used to mean "looking something up (on the Web)". It includes any activity in which a topic is identified and an effort is made to actively gather information for the purpose of furthering understanding. It may also include some post-collection analysis, such as a concern for quality or synthesis. For example, the Web can be searched, and hundreds or thousands of pages with some relation to the topic can typically be found within seconds. In addition, email (including mailing lists), online discussion forums (also known as message boards or bulletin board systems), and other personal communication facilities (instant messaging, IRC, newsgroups, etc.) can provide direct access to experts and other individuals with relevant interests and knowledge.

So defined, Internet research is distinct from library research (focusing on library-bound resources) and commercial database research (focusing on commercial databases). While many commercial databases are delivered through the Internet, and some libraries purchase access to library databases on behalf of their patrons, searching such databases is generally not considered part of "Internet research". Internet research should also be distinguished from scientific research (research following a defined and rigorous process) carried out on the Internet, from straightforward retrieval of details such as a name or phone number, and from research ''about'' the Internet.

Internet research has strengths and weaknesses. Strengths include speed, immediacy, and independence from physical distance. The quality of Internet research can be superior to that of other forms of research, but usually is not. Weaknesses include unrecognized bias, difficulty in verifying a writer's credentials (and therefore the accuracy or pertinence of the information obtained), and the question of whether the searcher has sufficient skill to draw meaningful results from the abundance of material typically available. The first resources retrieved may not be the most suitable for answering a particular question: for example, popularity is often a factor in ranking Internet search results, but popular information is not always the most correct or the most representative of the breadth of knowledge and opinion on a topic.

While commercial research fosters a deep concern with costs, and library research a concern with access, Internet research fosters a deep concern with quality, with managing the abundance of information, and with avoiding unintended bias. This is partly because Internet research occurs in a less mature information environment: one with less sophisticated and more poorly communicated search skills, and much less effort devoted to organizing information.
Library and commercial research environments offer many search tactics and strategies unavailable on the Internet, and they invest more deeply in organizing and vetting their information.

==Search tools==
The most popular search tools for finding information on the Internet include Web search engines, meta search engines, Web directories, and specialty search services.

A Web search engine uses software known as a Web crawler to follow the hyperlinks connecting the pages of the World Wide Web. The information on these pages is indexed and stored by the search engine. To access it, a user enters keywords in a search form, and the search engine queries its index using ranking algorithms that take into consideration the location and frequency of keywords on a page, along with the quality and number of external hyperlinks pointing at that page (a toy sketch of this indexing and ranking idea appears at the end of this section).

A meta search engine lets a user enter a search query once and runs it against multiple search engines simultaneously, producing a single aggregated list of results. Since no single search engine covers the entire Web, a meta search engine can produce a more comprehensive search. Most meta search engines automatically eliminate duplicate results. They have a significant limitation, however: the most popular search engines, such as Google, are often excluded for legal reasons. (A minimal aggregation sketch is given below.)

A Web directory organizes subjects in a hierarchical fashion that lets users investigate the breadth of a specific topic and drill down to find relevant links and content. Web directories can be assembled automatically by algorithms or handcrafted. Human-edited Web directories have the distinct advantage of higher quality and reliability, while those produced by algorithms can offer more comprehensive coverage. The scope of a Web directory is generally broad, as with DMOZ, Yahoo! and the WWW Virtual Library, which cover a wide range of subjects, though some directories focus on specific topics. (The drill-down idea is sketched below.)

Specialty search tools enable users to find information that conventional search engines and meta search engines cannot access because the content is stored in databases. In fact, the vast majority of information on the Web is stored in databases that require users to go to a specific site and access it through a search form, and the content is often generated dynamically. As a consequence, Web crawlers are unable to index this information; in a sense, it is "hidden" from search engines, leading to the terms ''invisible Web'' and ''deep Web''. Specialty search tools have evolved to provide users with the means to find deep Web content quickly and easily. These tools rely on advanced bot and intelligent agent technologies to search the deep Web and automatically generate specialty Web directories, such as the Virtual Private Library. (A sketch of why form-driven content eludes crawlers follows below.)
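The following is a minimal, purely illustrative sketch of the indexing and ranking idea described above. The pages, the title-versus-body weighting, and the inlink bonus are all invented for the example; production search engines use far more elaborate signals.

```python
# Toy inverted index and ranker: keyword location (title vs. body),
# keyword frequency, and a bonus for inbound links, as described above.
from collections import defaultdict

pages = {  # invented example pages
    "http://example.org/a": {"title": "internet research basics",
                             "body": "how to search the web for research",
                             "inlinks": 12},
    "http://example.org/b": {"title": "cooking tips",
                             "body": "internet recipes and research on flavor",
                             "inlinks": 3},
}

index = defaultdict(list)  # term -> URLs containing it
for url, page in pages.items():
    for term in set((page["title"] + " " + page["body"]).split()):
        index[term].append(url)

def score(url, terms):
    """Weight title matches above body matches, then add a small
    bonus for pages with more external links pointing at them."""
    page = pages[url]
    s = sum(2.0 * page["title"].split().count(t) +
            1.0 * page["body"].split().count(t) for t in terms)
    return s + 0.1 * page["inlinks"]

def search(query):
    terms = query.lower().split()
    hits = {url for t in terms for url in index.get(t, [])}
    return sorted(hits, key=lambda u: score(u, terms), reverse=True)

print(search("internet research"))  # -> page "a" ranks above page "b"
```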
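The aggregation-and-deduplication behaviour of a meta search engine can be sketched as follows. The two engine endpoints and the JSON shape of their replies are assumptions made for illustration only; real engines differ, and many forbid automated querying.

```python
# Minimal meta-search sketch: fan one query out to several engines in
# parallel, merge the results, and drop duplicate URLs.
import concurrent.futures
import json
import urllib.parse
import urllib.request

ENGINES = [  # hypothetical endpoints, assumed to return
             # {"results": [{"url": ..., "title": ...}, ...]}
    "https://engine-one.example/search?q=",
    "https://engine-two.example/search?q=",
]

def query_engine(base_url, query):
    with urllib.request.urlopen(base_url + urllib.parse.quote(query),
                                timeout=5) as resp:
        return json.load(resp).get("results", [])

def meta_search(query):
    merged, seen = [], set()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_engine, e, query) for e in ENGINES]
        for f in concurrent.futures.as_completed(futures):
            for hit in f.result():
                if hit["url"] not in seen:  # eliminate duplicate results
                    seen.add(hit["url"])
                    merged.append(hit)
    return merged

# results = meta_search("internet research")  # needs real endpoints
```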
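A hand-built Web directory is essentially a tree of categories with links filed at the leaves, which makes the drill-down operation simple. The categories and links below are invented; real directories such as DMOZ held millions of editor-reviewed entries.

```python
# A fragment of a hierarchical directory as a nested dict, plus the
# drill-down operation described above.
directory = {
    "Science": {
        "Biology": {"_links": ["http://example.org/genetics"]},
        "Physics": {"_links": ["http://example.org/quantum"]},
    },
    "Recreation": {
        "Travel": {"_links": ["http://example.org/hiking"]},
    },
}

def drill_down(tree, path):
    """Follow a category path, e.g. ['Science', 'Physics'], and
    return the links filed under that category."""
    node = tree
    for category in path:
        node = node[category]
    return node.get("_links", [])

print(drill_down(directory, ["Science", "Physics"]))
```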
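Finally, a sketch of why deep Web content eludes crawlers: the records live in a database behind a search form, so there is no static hyperlinked page for a crawler to follow; retrieving them means submitting the form, as a visitor's browser would. The form URL and field name here are hypothetical.

```python
# Reaching "deep Web" content by submitting a site's own search form.
import urllib.parse
import urllib.request

def query_database_site(term):
    # An HTTP POST to the site's search form (hypothetical URL/field);
    # the results page is generated dynamically from the database, so
    # a link-following crawler never encounters it.
    data = urllib.parse.urlencode({"query": term}).encode()
    req = urllib.request.Request("https://archive.example/search", data=data)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

# html = query_database_site("internet research")  # needs a real site
```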